
    Probing Leptogenesis at Future Colliders

    We investigate whether leptogenesis, as a mechanism for explaining the baryon asymmetry of the universe, can be tested at future colliders. Focusing on the minimal scenario of two right-handed neutrinos, we identify the allowed parameter space for successful leptogenesis in the heavy neutrino mass range between 5 and 50 GeV. Our calculation includes both the lepton-flavour-violating contribution from heavy neutrino oscillations and the lepton-number-violating contribution from Higgs decays to the baryon asymmetry of the universe. We confront this parameter space region with the discovery potential for heavy neutrinos at future lepton colliders, which can be very sensitive in this mass range via displaced vertex searches. Beyond the discovery of heavy neutrinos, we study the precision at which the flavour-dependent active-sterile mixing angles can be measured. The measurement of these mixing angles at future colliders can test whether a minimal type I seesaw mechanism is the origin of the light neutrino masses, and it can be a first step towards probing leptogenesis as the mechanism of baryogenesis. We discuss how a stronger test could be achieved with an additional measurement of the heavy neutrino mass difference.
    Comment: 30 pages plus appendix, 13 figures; references added, discussion extended, two figures added; matches journal version.

    An image processing pipeline to segment iris for unconstrained cow identification system

    One of the most evident costs in cow farming is the identification of the animals. Classic identification processes are labour-intensive, prone to human error, and invasive for the animal. An automated alternative is animal identification based on unique biometric patterns, such as iris recognition; in this context, correct segmentation of the region of interest becomes critically important. This work introduces a bovine iris segmentation pipeline that processes images taken in the wild, extracting the iris region. The solution deals with images taken with a regular visible-light camera in real scenarios, where reflections in the iris and camera flash introduce a high level of noise that makes the segmentation procedure challenging. Traditional segmentation techniques for the human iris are not applicable given the nature of the bovine eye; to this end, a dataset composed of catalogued images of Aberdeen Angus cattle with manually labelled ground truth has been used for the experiments and made publicly available. A unique ID number for each animal in the dataset is provided, making it suitable for recognition tasks. Segmentation results have been validated on our dataset, showing high reliability: with the most pessimistic metric (intersection over union), a mean score of 0.8957 has been obtained.
    Authors: Larregui, Juan Ignacio (CONICET, Instituto de Ciencias e Ingeniería de la Computación; Universidad Nacional del Sur, Argentina); Cazzato, Dario (University of Luxembourg, Interdisciplinary Centre for Security, Reliability and Trust, Luxembourg); Castro, Silvia Mabel (CONICET, Instituto de Ciencias e Ingeniería de la Computación; Universidad Nacional del Sur, Argentina)
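    The validation metric named above, intersection over union (IoU), can be sketched for binary segmentation masks as follows. The function name and the toy masks are illustrative, not the paper's actual evaluation code.

```python
# Minimal sketch of the intersection-over-union (IoU) score used to
# validate segmentation masks. Masks are flat lists of 0/1 pixel labels.
def iou(pred_mask, gt_mask):
    """IoU = |pred AND gt| / |pred OR gt| over binary masks of equal length."""
    intersection = sum(1 for p, g in zip(pred_mask, gt_mask) if p == 1 and g == 1)
    union = sum(1 for p, g in zip(pred_mask, gt_mask) if p == 1 or g == 1)
    return intersection / union if union else 1.0

# Toy 1x8 "masks": the prediction overlaps the ground truth on 3 of 5 union pixels.
pred = [0, 1, 1, 1, 0, 0, 1, 0]
gt   = [0, 1, 1, 0, 0, 0, 1, 1]
print(iou(pred, gt))  # → 0.6
```

    IoU is called pessimistic because both false positives and false negatives enlarge the denominator, so a mean of 0.8957 indicates close agreement between predicted and labelled iris regions.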

    A study on different experimental configurations for age, race, and gender estimation problems

    This paper presents a detailed study of different algorithmic configurations for estimating soft biometric traits. In particular, a recently introduced common framework is the starting point of the study: it includes an initial facial detection, the subsequent facial traits description, a data reduction step, and a final classification step. The algorithmic configurations are characterized by different descriptors and by different strategies for building the training dataset and for scaling the data given in input to the classifier. Experimental evaluations have been carried out both on publicly available datasets and on image sequences specifically acquired in order to assess performance under real-world conditions, i.e., in the presence of scaling and rotation.
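    The four-stage framework above (detection, description, data reduction/scaling, classification) can be sketched end to end. Everything here is a toy stand-in under assumed names, not the descriptors or classifiers the paper actually compares; a nearest-centroid classifier stands in for the final classification step.

```python
# Hedged sketch of a detection -> description -> scaling -> classification
# configuration. All functions and data are illustrative stand-ins.

def scale_minmax(vec, mins, maxs):
    # One input-scaling strategy: min-max normalization per feature.
    return [(v - lo) / (hi - lo) if hi > lo else 0.0
            for v, lo, hi in zip(vec, mins, maxs)]

def nearest_centroid(vec, centroids):
    # Final classification step: return the label of the closest class centroid.
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(vec, centroids[label]))

# Toy training set: two classes with 2-D descriptors (e.g. an age-group trait).
train = {"young": [[0.1, 0.2], [0.2, 0.1]],
         "adult": [[0.9, 0.8], [0.8, 0.9]]}
centroids = {label: [sum(col) / len(col) for col in zip(*vecs)]
             for label, vecs in train.items()}

query = scale_minmax([0.15, 0.18], mins=[0.0, 0.0], maxs=[1.0, 1.0])
print(nearest_centroid(query, centroids))  # → young
```

    Swapping the descriptor, the scaling strategy, or the classifier in such a pipeline is exactly the kind of configuration variation the study evaluates.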

    When I Look into Your Eyes: A Survey on Computer Vision Contributions for Human Gaze Estimation and Tracking

    The automatic detection of eye positions, of their temporal consistency, and of their mapping into a line of sight in the real world (to find where a person is looking) is reported in the scientific literature as gaze tracking. This has become a very hot topic in computer vision over the last decades, with a surprising and continuously growing number of application fields. A very long journey has been made since the first pioneering works, and the continuous search for more accurate solutions has been further boosted in the last decade, when deep neural networks revolutionized the whole machine learning area, and gaze tracking with it. In this arena, it is increasingly useful to find guidance in survey/review articles that collect the most relevant works, lay out clear pros and cons of existing techniques, and introduce a precise taxonomy. Such manuscripts allow researchers and practitioners to choose the best way to move towards their application or scientific goals. The literature contains holistic and specifically technological survey documents (even if not up to date), but, unfortunately, there is no overview discussing how the great advancements in computer vision have impacted gaze tracking. Thus, this work represents an attempt to fill this gap, also introducing a wider point of view that leads to a new taxonomy (extending the consolidated ones) by considering gaze tracking as a more exhaustive task that aims at estimating the gaze target from different perspectives: from the eye of the beholder (first-person view), from an external camera framing the beholder's face, from a third-person view looking at the scene where the beholder is placed, and from an external view independent of the beholder.

    Automatic Joint Attention Detection During Interaction with a Humanoid Robot

    Joint attention is an early-developing social-communicative skill in which two people (usually a young child and an adult) share attention with regard to an interesting object or event by means of gestures and gaze, and its presence is a key element in evaluating therapy in the case of autism spectrum disorders. In this work, a novel automatic system able to detect joint attention using a completely non-intrusive depth camera installed on the room ceiling is presented. In particular, in a scenario where a humanoid robot, a therapist (or a parent), and a child are interacting, the system can detect the social interaction between them. Specifically, a depth camera mounted at the top of the room is employed to detect, first of all, the triggering event to be monitored (performed by a humanoid robot) and, subsequently, to detect any joint attention episode by analyzing the orientation of the heads. The system operates in real time, providing the therapist with a completely non-intrusive instrument that helps evaluate the quality and the precise modalities of this predominant feature during the therapy session.
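    The head-orientation test described above can be illustrated with simple 2-D geometry: joint attention is flagged when both participants' head-direction vectors point towards the same target within an angular threshold. The function names, the 2-D simplification, and the 20-degree threshold are assumptions for illustration, not the paper's actual parameters.

```python
# Illustrative geometry for a joint-attention check: both heads must be
# oriented towards the robot within an assumed angular threshold.
import math

def angle_to_target(position, heading, target):
    """Angle (radians) between a head-direction vector and the vector to the target."""
    to_target = (target[0] - position[0], target[1] - position[1])
    dot = heading[0] * to_target[0] + heading[1] * to_target[1]
    norm = math.hypot(*heading) * math.hypot(*to_target)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def joint_attention(child, adult, robot, threshold=math.radians(20)):
    # Each participant is (position, head_direction); both must face the robot.
    pos_c, head_c = child
    pos_a, head_a = adult
    return (angle_to_target(pos_c, head_c, robot) < threshold and
            angle_to_target(pos_a, head_a, robot) < threshold)

robot = (0.0, 0.0)
child = ((1.0, 0.0), (-1.0, 0.0))   # at x=1, head oriented towards the origin
adult = ((0.0, 2.0), (0.0, -1.0))   # at y=2, head oriented towards the origin
print(joint_attention(child, adult, robot))  # → True
```

    In the real system the head orientation would come from the overhead depth stream rather than being given directly, but the same angular-agreement test applies.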

    A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles

    The spread of Unmanned Aerial Vehicles (UAVs) in the last decade has revolutionized many application fields. The most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, mapping, and labeling. To achieve such complex goals, a high-level module builds semantic knowledge by leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information about what is sensed. All in all, object detection is undoubtedly the most important low-level task, and the sensors most employed to accomplish it are by far RGB cameras, owing to their cost, dimensions, and the wide literature on RGB-based object detection. This survey presents recent advancements in 2D object detection for the case of UAVs, focusing on the differences, strategies, and trade-offs between the generic problem of object detection and the adaptation of such solutions for UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by the works in the state of the art, rather than by hardware, physical, and/or technological constraints.